Search for: All records

Creators/Authors contains: "Murphy, ed., Robert"


  1. Abstract
     Motivation: Spectral unmixing methods attempt to determine the concentrations of different fluorophores present at each pixel location in an image by analyzing a set of measured emission spectra. Unmixing algorithms have shown great promise for applications where samples contain many fluorescent labels; however, existing methods perform poorly when confronted with autofluorescence-contaminated images.
     Results: We propose an unmixing algorithm designed to separate fluorophores with overlapping emission spectra from contamination by autofluorescence and background fluorescence. First, we formally define a generalization of the linear mixing model, called the affine mixture model (AMM), that specifically accounts for background fluorescence. Second, we use the AMM to derive an affine nonnegative matrix factorization method for estimating fluorophore endmember spectra from reference images. Lastly, we propose a semi-blind sparse affine spectral unmixing (SSASU) algorithm that uses knowledge of the estimated endmembers to learn the autofluorescence and background fluorescence spectra on a per-image basis. When unmixing real-world spectral images contaminated by autofluorescence, SSASU greatly improved proportion indeterminacy as compared to existing methods for a given relative reconstruction error.
     Availability and implementation: The source code used for this paper was written in Julia and is available with the test data at https://github.com/brossetti/ssasu.
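     The affine mixture model described in this abstract extends the linear mixing model with an additive background term, so each measured pixel spectrum is modeled as a nonnegative combination of fluorophore endmember spectra plus a background/autofluorescence spectrum. The sketch below is a minimal Python illustration of that mixing idea using per-pixel nonnegative least squares, with the background treated as one extra endmember. It is not the authors' SSASU algorithm (which learns the autofluorescence and background spectra per image and imposes sparsity), and all function and variable names here are invented for the example.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_affine(Y, E, b):
    """Per-pixel nonnegative unmixing under a simplified affine mixing model.

    Y : (n_channels, n_pixels) measured emission spectra
    E : (n_channels, n_fluors) fluorophore endmember spectra
    b : (n_channels,) background/autofluorescence spectrum (assumed known here)

    Each pixel y is modeled as y ~= E @ a + s * b with a >= 0 and s >= 0,
    so the background enters the fit as one extra nonnegative component.
    """
    A = np.column_stack([E, b])              # augment endmembers with background
    n_pixels = Y.shape[1]
    coeffs = np.zeros((A.shape[1], n_pixels))
    for j in range(n_pixels):
        coeffs[:, j], _ = nnls(A, Y[:, j])   # nonnegative least squares per pixel
    abundances, bg_scale = coeffs[:-1], coeffs[-1]
    return abundances, bg_scale

# Tiny synthetic example: 3 fluorophores over 32 spectral channels
rng = np.random.default_rng(0)
E = np.abs(rng.normal(size=(32, 3)))
b = np.linspace(1.0, 0.2, 32)                # smooth background spectrum
true_a = np.abs(rng.normal(size=(3, 100)))
Y = E @ true_a + np.outer(b, 0.5 * np.ones(100))
est_a, est_bg = unmix_affine(Y, E, b)
```

     In the semi-blind setting described by the abstract, the autofluorescence and background spectra are not known in advance; SSASU learns them per image from the estimated endmembers, which is what distinguishes it from the fixed-background sketch above.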
  2. Abstract
     Motivation: Synapses are essential to neural signal transmission. Therefore, quantification of synapses and related neurites from images is vital to gain insights into the underlying pathways of brain functionality and diseases. Despite the wide availability of synaptic punctum imaging data, several issues impede satisfactory quantification of these structures by current tools. First, the antibodies used for labeling synapses are not perfectly specific to synapses; they may also appear in neurites or other cell compartments. Second, the brightness of different neurites and synaptic puncta is heterogeneous due to variation in antibody concentration and synapse-intrinsic differences. Third, images often have a low signal-to-noise ratio due to constraints of experimental facilities and the availability of sensitive antibodies. These issues make the detection of synapses challenging and necessitate a new tool for easy and accurate synapse quantification.
     Results: We present an automatic probability-principled synapse detection algorithm and integrate it into our synapse quantification tool SynQuant. Derived from the theory of order statistics, our method controls the false discovery rate and improves the power of detecting synapses. SynQuant is unsupervised, works for both 2D and 3D data, and can handle multiple staining channels. Through extensive experiments on one synthetic and three real datasets with ground-truth annotation or manual labeling, SynQuant was demonstrated to outperform peer specialized unsupervised synapse detection tools as well as generic spot detection methods.
     Availability and implementation: Java source code, a Fiji plug-in, and test data are available at https://github.com/yu-lab-vt/SynQuant.
     Supplementary information: Supplementary data are available at Bioinformatics online.
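     The abstract above describes scoring candidate puncta with probabilities and thresholding them so that the false discovery rate is controlled. The Python sketch below illustrates that general idea under simplified, hypothetical assumptions: candidate regions come from a plain intensity threshold, each region is scored against its local surround with a Gaussian z-test rather than the paper's order-statistics derivation, and significance is controlled with the Benjamini-Hochberg procedure. It is not the SynQuant algorithm, and all names and parameters are invented for the illustration.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import norm

def detect_puncta_fdr(img, intensity_thresh, alpha=0.05):
    """Toy FDR-controlled punctum detection (illustration only)."""
    # Candidate puncta: connected components above a simple intensity threshold
    labels, n = ndimage.label(img > intensity_thresh)
    pvals = []
    for i in range(1, n + 1):
        region = labels == i
        # Local background: a ring of pixels surrounding the candidate region
        ring = ndimage.binary_dilation(region, iterations=3) & ~region
        bg = img[ring]
        if bg.size < 5:
            pvals.append(1.0)
            continue
        # One-sided z-test: is the region brighter than its local background?
        sem = bg.std(ddof=1) / np.sqrt(region.sum()) + 1e-9
        z = (img[region].mean() - bg.mean()) / sem
        pvals.append(norm.sf(z))
    pvals = np.asarray(pvals)

    # Benjamini-Hochberg step-up procedure to control the false discovery rate
    order = np.argsort(pvals)
    m = len(pvals)
    passed = pvals[order] <= alpha * np.arange(1, m + 1) / max(m, 1)
    k = (np.nonzero(passed)[0].max() + 1) if passed.any() else 0
    keep = np.zeros(m, dtype=bool)
    keep[order[:k]] = True

    # Relabel so the output contains only the accepted puncta
    out = np.zeros_like(labels)
    for new_id, old_id in enumerate(np.nonzero(keep)[0] + 1, start=1):
        out[labels == old_id] = new_id
    return out

# Example on a synthetic image: a few bright blobs on a noisy background
rng = np.random.default_rng(0)
img = rng.normal(100, 5, size=(128, 128))
for y, x in [(30, 40), (80, 90), (100, 20)]:
    img[y - 2:y + 3, x - 2:x + 3] += 40
puncta = detect_puncta_fdr(img, intensity_thresh=115, alpha=0.05)
print("detected puncta:", puncta.max())
```

     Per the abstract, the actual method derives its region probabilities from order statistics rather than a Gaussian z-test, which is what gives it the stated false-discovery-rate control and improved detection power on heterogeneous, low-SNR data.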